Unsupervised video object segmentation aims to segment a target object in a video without a ground-truth mask in the initial frame. This challenging task requires extracting features of the most salient common objects across the video sequence. The difficulty can be addressed by using motion information such as optical flow, but relying only on information between adjacent frames leads to poor connectivity between distant frames and degraded performance. To address this problem, we propose a novel prototype memory network architecture. The proposed model effectively extracts RGB and motion information by extracting superpixel-based component prototypes from the input RGB image and optical flow map. In addition, the model scores the usefulness of the component prototypes in each frame based on a self-learning algorithm, adaptively stores the most useful prototypes, and discards obsolete ones. We use the prototypes in the memory bank to predict the mask of the next query frame, strengthening the association between distant frames to help accurate mask prediction. Our method is evaluated on three datasets, achieving state-of-the-art performance. We demonstrate the effectiveness of the proposed model through various ablation studies.
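The memory mechanism described above can be illustrated with a minimal sketch. This is not the paper's implementation: the `PrototypeMemory` class, its capacity parameter, and the cosine-similarity matching are all assumptions made for illustration of score-based storing and discarding of prototypes.

```python
import numpy as np

class PrototypeMemory:
    """Hypothetical sketch of a prototype memory bank: prototypes are scored
    for usefulness, the highest-scoring ones are kept, and stale ones are
    discarded once the bank exceeds its capacity."""

    def __init__(self, capacity=16):
        self.capacity = capacity
        self.prototypes = []  # list of (score, feature-vector) pairs

    def update(self, new_protos, scores):
        # Merge the new frame's component prototypes with the stored ones.
        for proto, score in zip(new_protos, scores):
            self.prototypes.append((float(score), proto))
        # Keep only the most useful prototypes; drop obsolete ones.
        self.prototypes.sort(key=lambda p: p[0], reverse=True)
        self.prototypes = self.prototypes[: self.capacity]

    def match(self, query_feats):
        # Per-pixel affinity between query features and stored prototypes,
        # via cosine similarity; used to predict the next query-frame mask.
        bank = np.stack([p[1] for p in self.prototypes])           # (M, C)
        bank = bank / np.linalg.norm(bank, axis=1, keepdims=True)
        q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
        return q @ bank.T                                          # (N, M)
```

Because the bank keeps high-scoring prototypes from any earlier frame, a query frame can match against evidence far away in time, which is the stated motivation for the memory.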
Unsupervised video object segmentation (VOS) aims to detect the most salient object in a video sequence at the pixel level. In unsupervised VOS, most state-of-the-art methods leverage, in addition to appearance cues, motion cues obtained from optical flow maps to exploit the property that salient objects usually have distinctive motion compared to the background. However, because they are overly dependent on motion cues, which may be unreliable in some cases, they cannot achieve stable predictions. To reduce this motion dependency of existing two-stream VOS methods, we propose a novel motion-as-option network that optionally utilizes motion cues. Additionally, to fully exploit the property that the motion network is not always required, we introduce a collaborative network learning strategy. On all public benchmark datasets, our proposed network provides state-of-the-art performance with real-time inference speed.
Feature similarity matching, which transfers information from the reference frame to the query frame, is a key component in semi-supervised video object segmentation. If surjective matching is adopted, background distractors easily appear and degrade performance. Bijective matching mechanisms try to prevent this by limiting the amount of information transferred to the query frame, but they have two limitations: 1) surjective matching cannot be fully exploited, since it is converted to bijective matching at test time; and 2) searching for the optimal hyperparameters requires test-time manual tuning. To overcome these limitations while ensuring reliable information transfer, we introduce an equalized matching mechanism. To prevent the reference frame information from being overly referenced, the potential contribution to the query frame is equalized by simply applying a softmax operation along the query dimension. On public benchmark datasets, our proposed method achieves performance comparable to state-of-the-art methods.
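The equalization step described above can be sketched as follows. This is a minimal sketch under stated assumptions, not the paper's implementation: the function name, the cosine-similarity choice, the temperature parameter, and the final per-pixel normalization are illustrative assumptions; the key idea shown is the softmax along the query axis, which makes every reference pixel contribute the same total weight.

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def equalized_matching(ref_feats, ref_mask, query_feats, temperature=1.0):
    """Hypothetical sketch of equalized matching: the similarity map is
    softmax-normalized along the query axis, so each reference pixel's total
    contribution to the query frame is equalized and no reference pixel can
    be overly referenced."""
    # Cosine similarity between every reference and query pixel.
    r = ref_feats / np.linalg.norm(ref_feats, axis=1, keepdims=True)      # (R, C)
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)  # (Q, C)
    sim = r @ q.T / temperature                                           # (R, Q)
    # Equalize: each reference pixel distributes a total weight of 1
    # across the query pixels.
    weight = softmax(sim, axis=1)                                         # (R, Q)
    # Transfer the reference mask to the query frame and normalize
    # by the total weight each query pixel received.
    score = weight.T @ ref_mask                                           # (Q,)
    return score / weight.sum(axis=0)
```

Contrast this with surjective matching, where the softmax runs along the reference axis and a single distractor-like reference pixel can dominate many query pixels.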
RGB-D salient object detection (SOD) has recently attracted attention because it is an important preprocessing operation for various vision tasks. However, despite advances in deep learning-based methods, RGB-D SOD remains challenging due to the large domain gap between RGB images and depth maps, and due to low-quality depth maps. To address this problem, we propose a novel superpixel prototype sampling network (SPSN) architecture. The proposed model splits the input RGB image and depth map into component superpixels to generate component prototypes. We design a prototype sampling network so that the network samples only the prototypes corresponding to salient objects. In addition, we propose a reliance selection module that recognizes the quality of each RGB and depth feature map and adaptively weights them in proportion to their reliability. The proposed method makes the model robust to inconsistencies between RGB images and depth maps and eliminates the influence of non-salient objects. Our method is evaluated on five popular datasets, achieving state-of-the-art performance. We demonstrate the effectiveness of the proposed method through comparative experiments.
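The reliability-proportional weighting can be illustrated with a minimal sketch. The function name, the use of a two-way softmax over scalar quality scores, and the linear fusion are assumptions for illustration, not the SPSN module itself, which operates on learned feature maps.

```python
import numpy as np

def reliance_weighted_fusion(rgb_feat, depth_feat, rgb_score, depth_score):
    """Hypothetical sketch of reliance-based fusion: each modality's feature
    map is weighted in proportion to a predicted reliability score, so a
    low-quality depth map contributes less to the fused representation."""
    scores = np.array([rgb_score, depth_score], dtype=float)
    w = np.exp(scores) / np.exp(scores).sum()  # softmax over the two modalities
    return w[0] * rgb_feat + w[1] * depth_feat
```

With equal scores the modalities are averaged; as the depth score drops relative to the RGB score, the fused features smoothly revert to RGB-only, which is the behavior the abstract attributes to the reliance selection module.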
Semi-supervised video object segmentation (VOS) aims to densely track certain designated objects in a video. One of the main challenges in this task is the existence of background distractors that appear similar to the target objects. We propose three novel strategies to suppress such distractors: 1) a spatio-temporally diversified template construction scheme to obtain generalized properties of the target objects; 2) a learnable distance-scoring function to exclude spatially distant distractors by exploiting the temporal consistency between two consecutive frames; and 3) swap-and-attach augmentation to force each object to have unique features by providing training samples containing entangled objects. On all public benchmark datasets, our model achieves performance comparable to contemporary state-of-the-art approaches, even with real-time performance. Qualitative results also demonstrate the superiority of our approach over existing methods. We believe our approach will be widely used in future VOS research.
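The third strategy, creating training samples with entangled objects, can be sketched in a simplified form. This is an illustrative guess at a swap-and-attach style operation on a pair of samples, not the paper's augmentation pipeline; the function name and the mask-based pasting are assumptions.

```python
import numpy as np

def swap_and_attach(img_a, mask_a, img_b, mask_b):
    """Hypothetical sketch of a swap-and-attach style augmentation: the object
    from each sample is pasted onto the other, producing training samples in
    which objects are entangled with unfamiliar distractors."""
    out_a, out_b = img_a.copy(), img_b.copy()
    sel_a, sel_b = mask_a.astype(bool), mask_b.astype(bool)
    out_a[sel_b] = img_b[sel_b]  # attach B's object onto A
    out_b[sel_a] = img_a[sel_a]  # attach A's object onto B
    return out_a, out_b
```

Training on such composites pushes the embedding of each object away from pasted-in distractors, which is the stated goal of giving every object unique features.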
The automated segmentation and tracking of macrophages during their migration are challenging tasks due to their dynamically changing shapes and motions. This paper proposes a new algorithm to achieve automatic cell tracking in time-lapse microscopy macrophage data. First, we design a segmentation method employing space-time filtering, local Otsu's thresholding, and the SUBSURF (subjective surface segmentation) method. Next, partial trajectories for cells overlapping in the temporal direction are extracted from the segmented images. Finally, the extracted trajectories are linked by considering their direction of movement. The segmented images and the trajectories obtained by the proposed method are compared with those of semi-automatic segmentation and manual tracking. The proposed tracking achieved 97.4% accuracy for macrophage data under challenging conditions: feeble fluorescent intensity, irregular shapes, and motion of macrophages. We expect that the automatically extracted trajectories of macrophages can provide evidence of how macrophages migrate depending on their polarization modes in situations such as wound healing.
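The Otsu thresholding step in the segmentation pipeline is a standard histogram-based method and can be sketched globally (the paper applies it locally, per window; the function name and bin count here are illustrative assumptions).

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Minimal sketch of Otsu's method: choose the threshold that maximizes
    the between-class variance of the intensity histogram."""
    hist, edges = np.histogram(img, bins=nbins)
    hist = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)             # probability of the background class
    w1 = 1.0 - w0                    # probability of the foreground class
    mu = np.cumsum(hist * centers)   # cumulative mean intensity
    mu_t = mu[-1]                    # global mean intensity
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu) ** 2 / (w0 * w1)
    return centers[np.nanargmax(between)]
```

Applying this per local window, as the abstract describes, adapts the threshold to the feeble and uneven fluorescent intensity of the macrophage data.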
Data-centric AI has shed light on the significance of data within the machine learning (ML) pipeline. Acknowledging its importance, various research efforts and policies have been suggested by academia, industry, and government departments. Although the capability to utilize existing data is essential, the capability to build a dataset has become more important than ever. In consideration of this trend, we propose "Data Management Operation and Recipes" to guide the industry regardless of task or domain. In other words, this paper presents the concept of DMOps derived from real-world experience. By offering a baseline for building data, we aim to help the industry streamline its data operations optimally.
With the rapid development of drone technologies, drones are widely used in many applications, including military domains. In this paper, a novel situation-aware DRL-based autonomous nonlinear drone mobility control algorithm is proposed for cyber-physical loitering munition applications. On the battlefield, designing a DRL-based autonomous control algorithm is not straightforward because real-world data gathering is generally not available. Therefore, the approach in this paper is to construct a cyber-physical virtual environment with Unity. Based on the virtual cyber-physical battlefield scenarios, a DRL-based automated nonlinear drone mobility control algorithm can be designed, evaluated, and visualized. Moreover, many obstacles exist that are harmful for linear trajectory control in real-world battlefield scenarios. Thus, our proposed autonomous nonlinear drone mobility control algorithm utilizes situation-aware components that are implemented with a Raycast function in the Unity virtual scenarios. Based on the gathered situation-aware information, the drone can autonomously and nonlinearly adjust its trajectory during flight. This approach is therefore clearly beneficial for avoiding obstacles in obstacle-deployed battlefields. Our visualization-based performance evaluation shows that the proposed algorithm is superior to other linear mobility control algorithms.
This paper proposes a new regularization algorithm referred to as macro-block dropout. Overfitting has been a difficult problem in training large neural network models. The dropout technique has proven to be simple yet very effective for regularization by preventing complex co-adaptations during training. In our work, we define a macro-block that contains a large number of units from the input to a Recurrent Neural Network (RNN). Rather than applying dropout to each unit, we apply random dropout to each macro-block. This algorithm has the effect of applying a different dropout rate to each layer even while keeping a constant average dropout rate, which yields better regularization. In our experiments using a Recurrent Neural Network-Transducer (RNN-T), this algorithm shows relative Word Error Rate (WER) improvements of 4.30% and 6.13% over conventional dropout on LibriSpeech test-clean and test-other. With an Attention-based Encoder-Decoder (AED) model, this algorithm shows relative WER improvements of 4.36% and 5.85% over conventional dropout on the same test sets.
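The block-level drop decision can be illustrated with a minimal NumPy sketch, assuming a 2-D (time x feature) input; the function name, block-tiling scheme, and inverted-dropout rescaling are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def macro_block_dropout(x, block_shape, drop_rate, rng=None, training=True):
    """Hypothetical sketch of macro-block dropout: the input is tiled into
    large macro-blocks and each block is kept or dropped as a whole, instead
    of dropping individual units; surviving activations are rescaled as in
    standard (inverted) dropout."""
    if not training or drop_rate == 0.0:
        return x
    rng = np.random.default_rng() if rng is None else rng
    t, f = x.shape
    bt, bf = block_shape
    # One keep/drop decision per macro-block.
    keep = rng.random((int(np.ceil(t / bt)), int(np.ceil(f / bf)))) >= drop_rate
    # Expand the block decisions to a per-unit mask and crop to the input size.
    mask = np.kron(keep, np.ones((bt, bf)))[:t, :f]
    return x * mask / (1.0 - drop_rate)
```

Because whole blocks are zeroed, the realized dropout rate varies from layer to layer around the constant average rate, which is the effect the abstract credits for the improved regularization.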
Affect understanding capability is essential for social robots to autonomously interact with a group of users in an intuitive and reciprocal way. However, the challenge of multi-person affect understanding comes not only from accurately perceiving each user's affective state (e.g., engagement) but also from recognizing the affect interplay between the members (e.g., joint engagement), which presents as complex but subtle nonverbal exchanges between them. Here we present a novel hybrid framework for identifying a parent-child dyad's joint engagement by combining a deep learning framework with various video augmentation techniques. Using a dataset of parent-child dyads reading storybooks together with a social robot at home, we first train RGB frame- and skeleton-based joint engagement recognition models on datasets augmented with four video augmentation techniques (General Aug, DeepFake, CutOut, and Mixed) to improve joint engagement classification performance. Second, we demonstrate experimental results on the use of the trained models in the robot-parent-child interaction context. Third, we introduce a behavior-based metric for evaluating the learned representations of the models to investigate model interpretability when recognizing joint engagement. This work serves as a first step toward fully unlocking the potential of end-to-end video understanding models pre-trained on large public datasets and augmented with data augmentation and visualization techniques for affect recognition in multi-person human-robot interaction in the wild.